Digital infinity

Digital infinity is a technical term in theoretical linguistics. Alternative formulations are 'discrete infinity' and 'the infinite use of finite means'. The idea is that all human languages follow a simple logical principle, according to which a limited set of digits — irreducible atomic sound elements — are combined to produce an infinite range of potentially meaningful expressions.

Language is, at its core, a system that is both digital and infinite. To my knowledge, there is no other biological system with these properties…

Noam Chomsky.[1] 

‘It remains for us to examine the spiritual element of speech ... this marvelous invention of composing from twenty-five or thirty sounds an infinite variety of words, which, although not having any resemblance in themselves to that which passes through our minds, nevertheless do not fail to reveal to others all of the secrets of the mind, and to make intelligible to others who cannot penetrate into the mind all that we conceive and all of the diverse movements of our souls.’

Antoine Arnauld and Claude Lancelot.[2] 

Noam Chomsky cites Galileo as perhaps the first to recognise the significance of digital infinity. This principle, notes Chomsky, is "the core property of human language, and one of its most distinctive properties: the use of finite means to express an unlimited array of thoughts". In his Dialogo, Galileo describes with wonder the discovery of a means to communicate one’s “most secret thoughts to any other person ... with no greater difficulty than the various collocations of twenty-four little characters upon a paper.” This is the greatest of all human inventions, Galileo continues, comparable to the creations of a Michelangelo…[1]

The computational theory of mind

'Digital infinity' corresponds to Noam Chomsky's 'Universal Grammar' mechanism, conceived as a computational module inserted somehow into Homo sapiens' otherwise 'messy' (non-digital) brain. This conception of human cognition — central to the so-called 'cognitive revolution' of the 1950s and 1960s — is generally attributed to Alan Turing, who was the first scientist to argue that a man-made machine might truly be said to 'think'. The idea of a thinking machine had previously been considered absurd, having been famously dismissed by René Descartes as theoretically impossible. Neither animals nor machines can think, insisted Descartes, since they lack a God-given soul.[3] Turing was well aware of this traditional theological objection, and explicitly countered it.[4]

Today's digital computers are instantiations of Turing's theoretical breakthrough in conceiving the possibility of a man-made universal thinking machine — known nowadays as a 'Turing machine'. No physical mechanism can be intrinsically 'digital', Turing explained, since — examined closely enough — its possible states will vary without limit. But if most of these states can be profitably ignored, leaving only a limited set of relevant ones, then functionally the machine may be considered 'digital':[4]

The digital computers considered in the last section may be classified amongst the "discrete-state machines." These are the machines which move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete-state machines. For instance in considering the switches for a lighting system it is a convenient fiction that each switch must be definitely on or definitely off. There must be intermediate positions, but for most purposes we can forget about them.

Alan Turing.[5] 
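Turing's point about convenient fictions can be sketched in a few lines of code. The function below treats a continuous 'switch position' as one of two discrete states by ignoring everything except which side of a threshold it falls on; the threshold of 0.5 is an arbitrary illustrative choice, not anything Turing specified.

```python
def discrete_state(position: float) -> str:
    """Map a continuous switch position in [0, 1] to a discrete state,
    ignoring all intermediate positions."""
    return "on" if position >= 0.5 else "off"

# Physically, positions 0.93 and 0.98 differ; functionally, the
# machine treats both as the same definite state.
assert discrete_state(0.93) == discrete_state(0.98) == "on"
assert discrete_state(0.07) == "off"
```

The continuous variation is still there; the 'digital' machine is simply one whose behaviour never depends on it.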

An implication is that 'digits' don't exist: they and their combinations are no more than convenient fictions, operating on a level quite independent of the material, physical world. In the case of a binary digital machine, the choice at each point is restricted to 'off' versus 'on'. Crucially, the intrinsic properties of the medium used to encode signals then have no effect on the message conveyed. 'Off' (or alternatively 'on') remains unchanged regardless of whether the signal consists of smoke, electricity, sound, light or anything else. In the case of analog (more-versus-less) gradations, this is not so because the range of possible settings is unlimited. Moreover, it does matter which particular medium is being employed: equating a certain intensity of smoke with a corresponding intensity of light, sound or electricity is just not possible. In other words, only in the case of digital computation and communication can information be truly independent of the physical, chemical or other properties of the materials used to encode and transmit messages.
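The medium-independence argument above can be illustrated with a small sketch. The 'media' below are toy dictionaries, not real signal models: the point is only that a binary message survives translation into any of them and back, because nothing but the off/on distinction carries information.

```python
# Illustrative encodings: each medium maps the binary digits 0 and 1
# onto two distinguishable physical states.
MEDIA = {
    "smoke":       {0: "no puff", 1: "puff"},
    "electricity": {0: "low voltage", 1: "high voltage"},
    "light":       {0: "dark", 1: "flash"},
}

def transmit(bits, medium):
    """Encode a bit sequence as physical signals in the given medium."""
    encode = MEDIA[medium]
    return [encode[b] for b in bits]

def receive(signals, medium):
    """Decode physical signals back into the original bit sequence."""
    decode = {signal: bit for bit, signal in MEDIA[medium].items()}
    return [decode[s] for s in signals]

message = [1, 0, 1, 1, 0]
for medium in MEDIA:
    # The message is invariant across every medium.
    assert receive(transmit(message, medium), medium) == message
```

An analog signal admits no such table: a particular intensity of smoke has no principled equivalent in volts or lumens, which is why only digital encoding detaches the message from its physical carrier.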

Digital computation and communication thus operate independently of the physical properties of the computing machine. As scientists and philosophers during the 1950s digested the implications, they exploited the insight to explain why 'mind' apparently operates on so different a level from 'matter'. Descartes' celebrated distinction between immortal 'soul' and mortal 'body' was conceptualised, following Turing, as no more than the distinction between (digitally encoded) information on the one hand, and, on the other, the particular physical medium — light, sound, electricity or whatever — chosen to transmit the corresponding signals. Note that the Cartesian assumption of mind's independence from matter implied — in the human case at least — the existence of some kind of digital computer operating inside the human brain.

Information and computation reside in patterns of data and in relations of logic that are independent of the physical medium that carries them. When you telephone your mother in another city, the message stays the same as it goes from your lips to her ears even as it physically changes its form, from vibrating air, to electricity in a wire, to charges in silicon, to flickering light in a fibre optic cable, to electromagnetic waves, and then back again in reverse order. … Likewise, a given programme can run on computers made of vacuum tubes, electromagnetic switches, transistors, integrated circuits, or well-trained pigeons, and it accomplishes the same things for the same reasons. This insight, first expressed by the mathematician Alan Turing, the computer scientists Allen Newell, Herbert Simon, and Marvin Minsky, and the philosophers Hilary Putnam and Jerry Fodor, is now called the computational theory of mind. It is one of the great ideas in intellectual history, for it solves one of the puzzles that make up the ‘mind-body problem’, how to connect the ethereal world of meaning and intention, the stuff of our mental lives, with a physical hunk of matter like the brain. … For millennia this has been a paradox. … The computational theory of mind resolves the paradox.

Steven Pinker.[6] 

A digital apparatus

Turing did not claim that the human mind really is a digital computer. More modestly, he proposed that digital computers might one day qualify in human eyes as machines endowed with "mind". However, it was not long before philosophers (most notably Hilary Putnam) took what seemed to be the next logical step — arguing that the human mind itself is a digital computer, or at least that certain mental "modules" are best understood that way.

Noam Chomsky rose to prominence as one of the most audacious champions of this 'cognitive revolution'. Language, he proposed, is a computational 'module' or 'device' unique to the human brain. Previously, linguists had thought of language as learned cultural behaviour: chaotically variable, inseparable from social life and therefore beyond the remit of natural science. The Swiss linguist Ferdinand de Saussure, for example, had defined linguistics as a branch of 'semiotics', this in turn being inseparable from anthropology, sociology and the study of man-made conventions and institutions. By picturing language instead as the natural mechanism of 'digital infinity', Chomsky promised to bring scientific rigour to linguistics as a branch of strictly natural science.

In the 1950s, phonology was generally considered the most rigorously scientific branch of linguistics. For phonologists, "digital infinity" was made possible by the human vocal apparatus conceptualised as a kind of machine consisting of a small number of binary switches. For example, "voicing" could be switched 'on' or 'off', as could palatalisation, nasalisation and so forth. Take the consonant [b], for example, and switch voicing to the 'off' position — and you get [p]. Every possible phoneme in any of the world's languages might in this way be generated by specifying a particular on/off configuration of the switches ('articulators') constituting the human vocal apparatus. This approach became celebrated as 'distinctive features' theory, in large part credited to the Russian linguist and polymath Roman Jakobson. The basic idea was that every phoneme in every natural language could in principle be reduced to its irreducible atomic components — a set of 'on' or 'off' choices ('distinctive features') allowed by the design of a digital apparatus consisting of the human tongue, soft palate, lips, larynx and so forth.
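The feature-switching logic can be sketched as follows. Each phoneme is modelled as a bundle of binary feature values; the feature inventory here is drastically simplified for illustration and is not Jakobson's actual feature set.

```python
# A toy distinctive-features lexicon: each phoneme is fully determined
# by its on/off feature configuration (simplified inventory).
PHONEMES = {
    frozenset({("voiced", True),  ("labial", True), ("nasal", False)}): "b",
    frozenset({("voiced", False), ("labial", True), ("nasal", False)}): "p",
    frozenset({("voiced", True),  ("labial", True), ("nasal", True)}):  "m",
}

def flip(features, feature_name):
    """Toggle a single binary feature, yielding a new feature bundle."""
    return frozenset(
        (name, (not value) if name == feature_name else value)
        for name, value in features
    )

b = frozenset({("voiced", True), ("labial", True), ("nasal", False)})
assert PHONEMES[flip(b, "voiced")] == "p"  # switch voicing off: [b] becomes [p]
assert PHONEMES[flip(b, "nasal")] == "m"   # switch nasality on: [b] becomes [m]
```

Flipping one switch moves the system from one phoneme to another, exactly as the [b]/[p] voicing example describes.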

Building on Jakobson's 'distinctive features' theory, Chomsky's original work was in morphophonemics. During the 1950s, he became inspired by the prospect of extending the 'distinctive features' approach — now hugely successful — far beyond its original field of application. Jakobson had already persuaded a young social anthropologist — Claude Lévi-Strauss — to apply distinctive features theory to the study of kinship systems, in this way inaugurating 'structural anthropology'. Chomsky — who got his job at the Massachusetts Institute of Technology thanks to Jakobson's intervention — hoped to explore the extent to which similar principles might be applied to the various sub-disciplines of linguistics including syntax and semantics.[7] If the phonological component of language was demonstrably rooted in a digital biological 'organ' or 'device', why not the syntactic and semantic components as well? Might not language as a whole prove to be a digital organ or device?

This led Chomsky and his supporters to the idea of 'generative semantics' — the proposal that the speaker generates word and sentence meanings by combining irreducible constituent elements of meaning, each of which can be switched 'on' or 'off'. To produce 'bachelor', using this logic, the relevant component of the brain must switch 'animate', 'human' and 'male' to the 'on' (+) position while keeping 'married' switched 'off' (-). The underlying assumption here is that the requisite conceptual primitives — irreducible notions such as 'animate', 'male', 'human', 'married' and so forth — are genetically determined internal components of the human language organ. While this idea would rapidly encounter intellectual difficulties — sparking controversies culminating in the so-called 'linguistics wars'[8] — it attracted young and ambitious scholars impressed by the recent emergence of computer science and its promise of scientific parsimony and unification. If the theory worked, the simple principle of 'digital infinity' would apply to language as a whole. Linguistics in its entirety might then lay claim to the coveted status of natural science. No part of the discipline — not even semantics — need be contaminated any longer by association with such 'un-scientific' disciplines as cultural anthropology or social science.
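The 'bachelor' example above can be given the same toy treatment as the phonological one. The feature names follow the text's example; the second entry ('spinster') and the lexicon itself are illustrative assumptions, not part of any actual proposal.

```python
def compose(**features):
    """Build a word meaning as a frozen bundle of binary (+/-) conceptual
    primitives, e.g. animate=True for '+animate'."""
    return frozenset(features.items())

# Illustrative lexicon: each meaning is an on/off configuration of
# putatively innate conceptual primitives.
LEXICON = {
    compose(animate=True, human=True, male=True,  married=False): "bachelor",
    compose(animate=True, human=True, male=False, married=False): "spinster",
}

bachelor = compose(animate=True, human=True, male=True, married=False)
assert LEXICON[bachelor] == "bachelor"
```

On this picture, word meanings are generated the same way phonemes are: by setting a finite bank of innate binary switches, which is what would have extended 'digital infinity' from sound to meaning.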

References

  1. ^ a b Noam Chomsky, 1991. Linguistics and Cognitive Science: Problems and Mysteries. in Asa Kasher (ed.), The Chomskyan Turn. Oxford: Blackwell, pp. 26-53, p. 50.
  2. ^ Antoine Arnauld and Claude Lancelot, 1975 (1660). The Port-Royal Grammar. The Hague: Mouton, pp. 65-66.
  3. ^ René Descartes, 1985 [1637]. 'Discourse on the Method.' In The Philosophical Writings of Descartes. Translated by J. Cottingham, R. Stoothoff and D. Murdoch. Cambridge: Cambridge University Press, Vol. 1, pp. 139-141.
  4. ^ a b Turing, A.M. 1950. Computing machinery and intelligence. Mind 59: 433-60.
  5. ^ Alan Turing, 1950. Computing machinery and intelligence. Mind 59: 433-60.
  6. ^ Steven Pinker, 1997. How the Mind Works. London: Allen Lane, Penguin, p. 24.
  7. ^ Chomsky, N. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press, pp. 64-127.
  8. ^ Harris, R. A. 1993. The Linguistics Wars. New York and Oxford: Oxford University Press.